Neurocomputational speech processing is the computer simulation of speech production and speech perception by reference to the natural neuronal processes of speech production and speech perception as they occur in the human nervous system (central nervous system and peripheral nervous system). The topic is grounded in neuroscience and computational neuroscience.〔Rouat J, Loiselle S, Pichevar R (2007) Towards neurocomputational speech and sound processing. In: Stylianou Y, Faundez-Zanuy M, Esposito A. ''Progress in Nonlinear Speech Processing'' (Springer, Berlin) pp. 58–77. (ACMDL )〕

== Overview ==
Neurocomputational models of speech processing are complex. They comprise at least a cognitive part, a motor part and a sensory part.

The cognitive or linguistic part of a neurocomputational model of speech processing comprises, on the side of speech production, the neural activation or generation of a phonemic representation (e.g. WEAVER++,〔(WEAVER++ )〕 the neurocomputational and extended version of the Levelt model developed by Ardi Roelofs〔(Ardi Roelofs )〕), and, on the side of speech perception or speech comprehension, the neural activation or generation of an intention or meaning.

The motor part of a neurocomputational model of speech processing starts with a phonemic representation of a speech item, activates a motor plan and ends with the articulation of that particular speech item (see also: articulatory phonetics).

The sensory part of a neurocomputational model of speech processing starts with an acoustic signal of a speech item (the acoustic speech signal), generates an auditory representation for that signal and activates a phonemic representation for that speech item.
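The division of labour among the cognitive, motor and sensory parts can be illustrated with a minimal toy sketch. This is not any published model (such as WEAVER++); all lexical entries, motor plans and acoustic cues below are invented placeholder data, chosen only to make the pipeline's data flow explicit.

```python
# Toy sketch of the three-part architecture described above.
# All mappings are hypothetical illustrations, not real model components.

# Cognitive/linguistic part: a lexical item activates a phonemic representation.
LEXICON = {"ba": ["b", "a"]}

# Motor part: each phoneme activates a (toy) motor plan, i.e. an articulatory
# gesture that will be executed during articulation.
MOTOR_PLANS = {"b": "bilabial-closure", "a": "open-vowel"}

# Sensory part: each gesture yields a (toy) acoustic cue, and the auditory
# side maps cues back onto phonemic representations.
ACOUSTIC_CUES = {"bilabial-closure": "burst", "open-vowel": "low-F1-formant"}
AUDITORY_MAP = {"burst": "b", "low-F1-formant": "a"}


def produce(word):
    """Production path: phonemic representation -> motor plan -> acoustic signal."""
    phonemes = LEXICON[word]
    gestures = [MOTOR_PLANS[p] for p in phonemes]
    return [ACOUSTIC_CUES[g] for g in gestures]


def perceive(signal):
    """Perception path: acoustic signal -> auditory representation -> phonemes."""
    return [AUDITORY_MAP[cue] for cue in signal]


signal = produce("ba")
print(signal)            # ['burst', 'low-F1-formant']
print(perceive(signal))  # ['b', 'a']
```

In a real neurocomputational model each of these table lookups would be a trained neural mapping, and the acoustic signal would be a continuous waveform rather than a list of symbolic cues; the sketch only shows how the production and perception paths meet at the phonemic level.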